Convergence Rate of Coefficient Regularized Kernel-based Learning Algorithms
Authors
Abstract
We investigate machine learning for least squares regression with data-dependent hypothesis spaces and coefficient regularization algorithms based on general kernels. We provide estimates of the learning rates for both regression and classification when the hypothesis spaces are sample dependent. Under a weak condition on the kernels, we derive the learning error by estimating the decay rate of some K-functional when the target functions belong to the range of some Hilbert-Schmidt integral operator.
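As a rough illustration of the kind of scheme studied here (a hypothetical sketch, not the authors' exact formulation; the l2 penalty, the Gaussian kernel, and the lambda scaling are illustrative assumptions), coefficient regularization builds the estimator from the sample-dependent dictionary {K(., x_i)} and penalizes the coefficient vector directly:

import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # A concrete placeholder kernel; coefficient regularization allows general,
    # not necessarily positive semi-definite, kernels.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def coefficient_regularized_ls(X, y, lam=1e-3, sigma=0.3):
    # l2 coefficient regularization over {K(., x_i)}:
    #   minimize (1/m) * ||K c - y||^2 + lam * ||c||^2  in c,
    # with closed form c = (K^T K / m + lam * I)^{-1} K^T y / m.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K.T @ K / m + lam * np.eye(m), K.T @ y / m)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ c

# toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
f = coefficient_regularized_ls(X, y)
print(f(np.array([[0.5]])))

The learning rates in the abstract concern how fast such an estimator approaches the regression function as the sample size grows, under regularity assumptions on the kernel and the target.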
Similar Resources
Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate
The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum that acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, owing to its local generalization of weight updating, simple structure, and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...
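For readers unfamiliar with CMAC, a minimal 1-D sketch follows (a generic tiling-based implementation with a fixed learning rate; the class name and parameters are hypothetical, and the changeable-rate schemes proposed in the paper are not reproduced here):

import numpy as np

class SimpleCMAC1D:
    # Overlapping tilings act as a lookup table: the output is the sum of the
    # weights of the tiles activated by the input, and training touches only
    # those weights (local generalization).
    def __init__(self, n_tilings=8, tiles=16, x_min=0.0, x_max=1.0, lr=0.1):
        self.n_tilings, self.tiles = n_tilings, tiles
        self.x_min, self.x_max, self.lr = x_min, x_max, lr
        self.w = np.zeros((n_tilings, tiles + 1))

    def _active_tiles(self, x):
        width = (self.x_max - self.x_min) / self.tiles
        idx = []
        for t in range(self.n_tilings):
            offset = t * width / self.n_tilings  # each tiling is shifted slightly
            i = int((x - self.x_min + offset) / width)
            idx.append(min(max(i, 0), self.tiles))
        return idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._active_tiles(x)))

    def update(self, x, target):
        err = target - self.predict(x)
        for t, i in enumerate(self._active_tiles(x)):
            # LMS rule: spread the correction equally over the activated tiles
            self.w[t, i] += self.lr * err / self.n_tilings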
On the Convergence Rate of Kernel-Based Sequential Greedy Regression
The kernel-based greedy algorithm can be summarized as follows. Let $t$ be a stopping time and let $\beta$ be a positive constant. Set $\hat f_\beta^{0} = 0$, and then for $\tau = 1, 2, \ldots, t$ define
$$(\hat h_\tau, \hat\alpha_\tau, \hat\beta_\tau) = \mathop{\arg\min}_{h \in \hat{\mathcal H},\ 0 \le \alpha \le 1,\ 0 \le \beta' \le \beta} \mathcal{E}_{\mathbf z}\big((1-\alpha)\hat f_\beta^{\tau-1} + \alpha \beta' h\big), \qquad \hat f_\beta^{\tau} = (1-\hat\alpha_\tau)\hat f_\beta^{\tau-1} + \hat\alpha_\tau \hat\beta_\tau \hat h_\tau. \quad (1.6)$$
Different from the regularized algorithms in [6, 12, 14–18], the above learning algorithm tries to rea...
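A literal, grid-search sketch of the update (1.6) for the square loss is given below, restricting the hypothesis set to the sample-based dictionary {K(., x_j)}; the dictionary, the grid resolution, and the kernel are assumptions for illustration, not the paper's exact setting:

import numpy as np

def greedy_kernel_regression(X, y, kernel, beta=1.0, t=20, n_grid=11):
    # Sequential greedy scheme: keep f^tau = (1 - alpha)*f^{tau-1} + alpha*beta'*h,
    # choosing (h, alpha, beta') to minimize the empirical squared error,
    # with h ranging over the columns of the kernel matrix.
    m = X.shape[0]
    K = kernel(X, X)                 # K[i, j] = K(x_i, x_j)
    f_vals = np.zeros(m)             # current iterate evaluated at the sample points
    alphas = np.linspace(0.0, 1.0, n_grid)
    betas = np.linspace(0.0, beta, n_grid)
    history = []                     # (j, alpha, beta') triples, to rebuild f off-sample
    for _ in range(t):
        best = None
        for j in range(m):
            h = K[:, j]
            for a in alphas:
                for b in betas:
                    cand = (1 - a) * f_vals + a * b * h
                    err = np.mean((cand - y) ** 2)
                    if best is None or err < best[0]:
                        best = (err, j, a, b, cand)
        _, j, a, b, f_vals = best
        history.append((j, a, b))
    return f_vals, history

If the dictionary functions are uniformly bounded, the convex-combination form keeps the iterates in a ball of radius on the order of beta, which plays the role of the explicit penalty in the regularized algorithms mentioned above.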
Regularized Policy Iteration
In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme, we propose the use of non-parametric methods with regularization, providing a convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2-regularization to...
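As a loose illustration of the L2-regularization idea in policy evaluation (a linear-feature sketch with hypothetical names; the paper itself uses non-parametric function spaces, and the double-sampling issue of the Bellman residual is ignored here):

import numpy as np

def l2_regularized_bellman_residual(Phi, Phi_next, r, gamma=0.99, lam=1e-2):
    # Minimize ||(Phi - gamma*Phi_next) w - r||^2 + lam*||w||^2 over the weights
    # w of a linear value function V(s) = phi(s)^T w; lam controls the
    # complexity of the function approximator.
    A = Phi - gamma * Phi_next
    d = Phi.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ r)

# toy usage: 100 transitions with 8 features (synthetic, for illustration only)
rng = np.random.default_rng(0)
Phi, Phi_next = rng.standard_normal((100, 8)), rng.standard_normal((100, 8))
r = rng.standard_normal(100)
print(l2_regularized_bellman_residual(Phi, Phi_next, r))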
Convergence analysis of online algorithms
In this paper, we are interested in the analysis of regularized online algorithms associated with reproducing kernel Hilbert spaces. General conditions on the loss function and step sizes are given to ensure convergence. Explicit learning rates are also given for particular step sizes.
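A common instance of such an algorithm, written here as a sketch for the square loss with an illustrative step-size schedule and regularization parameter (both are assumptions, not values from the paper), performs stochastic gradient descent in the RKHS:

import numpy as np

def online_regularized_kernel_ls(stream, kernel, lam=0.01, step=lambda t: 1.0 / (1 + t) ** 0.5):
    # Update: f_{t+1} = (1 - eta_t*lam) * f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
    # maintained through the expansion f_t = sum_i c_i K(x_i, .).
    centers, coefs = [], []
    for t, (x, y) in enumerate(stream):
        fx = sum(c * kernel(x, xc) for c, xc in zip(coefs, centers))
        eta = step(t)
        coefs = [(1.0 - eta * lam) * c for c in coefs]  # shrinkage from the regularizer
        centers.append(x)
        coefs.append(-eta * (fx - y))                   # gradient step on the new center
    return lambda xq: sum(c * kernel(xq, xc) for c, xc in zip(coefs, centers))

# toy usage with a Gaussian kernel on scalar inputs
k = lambda a, b: np.exp(-(a - b) ** 2 / 0.1)
rng = np.random.default_rng(1)
data = [(x, np.sin(3 * x) + 0.1 * rng.standard_normal()) for x in rng.uniform(-1, 1, 200)]
print(online_regularized_kernel_ls(data, k)(0.3))

Conditions on the loss and on how fast eta_t decays are exactly what determines whether such iterates converge and at what rate.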
Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR
Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and SA. In this work, two novel hybrid meta-heuristic algorithms, i.e. Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata for QSAR f...